Can LLM-Generated Misinformation Be Detected?
The advent of Large Language Models (LLMs) has made a transformative impact.
However, the potential for LLMs such as ChatGPT to be exploited to generate
misinformation poses a serious concern for online safety and public trust. A
fundamental research question is: will LLM-generated misinformation cause more
harm than human-written misinformation? We propose to tackle this question from
the perspective of detection difficulty. We first build a taxonomy of
LLM-generated misinformation. Next, we categorize and validate the potential
real-world methods for generating misinformation with LLMs. Finally, through
extensive empirical investigation, we discover that LLM-generated
misinformation can be harder for both humans and detectors to detect than
human-written misinformation with the same semantics, which suggests it can
have more deceptive styles and potentially cause more harm. We also discuss the
implications of our discovery on combating misinformation in the age of LLMs
and the countermeasures.
Comment: The code, dataset, and more resources on LLMs and misinformation will
be released on the project website: https://llm-misinformation.github.io
Combating Misinformation in the Age of LLMs: Opportunities and Challenges
Misinformation such as fake news and rumors poses a serious threat to
information ecosystems and public trust. The emergence of Large Language Models
(LLMs) has great potential to reshape the landscape of combating
misinformation. Generally, LLMs can be a double-edged sword in the fight. On
the one hand, LLMs bring promising opportunities for combating misinformation
due to their profound world knowledge and strong reasoning abilities. Thus, one
emergent question is: how to utilize LLMs to combat misinformation? On the
other hand, the critical challenge is that LLMs can be easily leveraged to
generate deceptive misinformation at scale. Then, another important question
is: how to combat LLM-generated misinformation? In this paper, we first
systematically review the history of combating misinformation before the advent
of LLMs. Then we illustrate the current efforts and present an outlook for
these two fundamental questions respectively. The goal of this survey paper is
to facilitate the progress of utilizing LLMs for fighting misinformation and
call for interdisciplinary efforts from different stakeholders for combating
LLM-generated misinformation.
Comment: 9 pages for the main paper, 35 pages including 656 references; more
resources on "LLMs Meet Misinformation" are on the website:
https://llm-misinformation.github.io
Research and Development of R290 Less Oil Rotary Compressor
The less-oil technique is one of the most important directions in rotary compressor development. This research applies the less-oil technique to an R290 rotary compressor in order to reduce the R290 charge amount in a room air conditioner and satisfy the requirements of IEC standard 60335-2-40. First, the less-oil rotary compressor was designed to resolve the oil supply issue, and the CFD simulation of the oil supply structure was validated. Second, oil supply, compressor performance, and reliability tests of the less-oil compressor were completed and compared with the original compressor. Third, using the same R290 room air conditioner, the reduction in R290 charge amount was measured and the system performance was confirmed. Last, the effects of the reduced oil charge on oil working viscosity and oil film thickness were tested and calculated. The results show that the less-oil rotary compressor has good prospects: the less-oil technique not only effectively decreases the refrigerant charge of an R290 room air conditioner but also achieves good performance and reliability for both the compressor and the system.
Layer thickness crossover of type-II multiferroic magnetism in NiI2
The discovery of atomically thin van der Waals ferroelectric and magnetic
materials encourages the exploration of 2D multiferroics, which holds the
promise to understand fascinating magnetoelectric interactions and fabricate
advanced spintronic devices. In addition to building a heterostructure
consisting of ferroelectric and magnetic ingredients, thinning down layered
multiferroics of spin origin such as NiI2 becomes a natural route to realize 2D
multiferroicity. However, the layer-dependent behavior, widely known in the
community of 2D materials, necessitates a rigorous scrutiny of the multiferroic
order in the few-layer limit. Here, we interrogate the layer thickness
crossover of helimagnetism in NiI2 that drives the ferroelectricity and thereby
type-II multiferroicity. By using wavelength-dependent polarization-resolved
optical second harmonic generation (SHG) to probe the ferroic symmetry, we find
that the SHG arises from the inversion-symmetry-breaking magnetic order, not
the previously assumed ferroelectricity. This magnetism-induced SHG is only
observed in bilayers and thicker layers and vanishes in the monolayer,
suggesting the critical role of interlayer exchange interaction in breaking
the degeneracy of geometrically frustrated spin structures in the triangular
lattice and stabilizing the type-II multiferroic magnetism in few-layer
samples. While the
helimagnetic transition temperature is layer dependent, the few-layer NiI2
exhibits another thickness evolution and reaches bulk-like behavior in the
trilayer, as indicated by the intermediate centrosymmetric antiferromagnetic
state revealed by Raman spectroscopy. Our work therefore highlights the magnetic
contribution to SHG and Raman spectroscopy in reduced dimension and guides the
optical study of 2D multiferroics.
Comment: 23 pages, 4 figures, 6 supplementary figures
PromptDA: Label-guided Data Augmentation for Prompt-based Few Shot Learners
Recent advances in large pre-trained language models (PLMs) have led to
impressive gains on natural language understanding (NLU) tasks with
task-specific fine-tuning. However, directly fine-tuning PLMs relies heavily
on large amounts of labeled instances, which are expensive and time-consuming
to obtain.
Prompt-based tuning on PLMs has proven valuable for few-shot tasks. Existing
works studying prompt-based tuning for few-shot NLU mainly focus on deriving
proper label words with a verbalizer or generating prompt templates for
eliciting semantics from PLMs. In addition, conventional data augmentation
methods have also been verified useful for few-shot tasks. However, there are
currently few data augmentation methods designed for the prompt-based tuning
paradigm. Therefore, we study a new problem of data augmentation for
prompt-based few-shot learners. Since label semantics are helpful in
prompt-based tuning, we propose a novel label-guided data augmentation method,
PromptDA, which exploits the enriched label semantic information for data
augmentation. Experimental results on several few-shot text classification
tasks show that our proposed framework achieves superior performance by
effectively leveraging label semantics and data augmentation in language
understanding.
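The abstract does not spell out PromptDA's mechanism, but the general idea of label-guided augmentation for prompt-based tuning can be sketched roughly as follows. The template, label words, and dataset below are illustrative assumptions, not the paper's actual setup: each class is mapped to several semantically related label words, and every few-shot instance is duplicated once per label word.

```python
# Hypothetical sketch of label-guided data augmentation for prompt-based
# few-shot tuning. All label words, the template, and the data are invented
# for illustration only.

label_words = {
    "positive": ["great", "good", "wonderful"],
    "negative": ["terrible", "bad", "awful"],
}

template = "{text} It was {label_word}."

def augment(dataset):
    """Expand each (text, label) pair into one prompt per label word."""
    augmented = []
    for text, label in dataset:
        for word in label_words[label]:
            augmented.append((template.format(text=text, label_word=word), label))
    return augmented

few_shot = [("The plot kept me hooked.", "positive"),
            ("A dull, lifeless film.", "negative")]
prompts = augment(few_shot)  # 2 instances -> 6 augmented prompts
```

The multiplied prompt set would then be used to tune the PLM as in ordinary prompt-based training, with each label word treated as a valid verbalization of its class.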
A RESEARCH ON LOAD SPECTRUM STATISTICAL ANALYSIS OF T100C TRAIN BOGIE BASED ON KERNEL DENSITY ESTIMATION ALGORITHM
Traditional train load spectrum statistics are all based on actual measurements, using the Weibull distribution to fit test results and calculate loads at different levels. However, the required testing mileage is long, the entire process is time-consuming, and it is difficult to fit the Weibull parameters. More importantly, the final results do not necessarily reflect the real characteristics of a sample because the load distribution is assumed beforehand. To overcome the disadvantages of traditional load spectrum statistical methods, in this paper we apply kernel density estimation to the statistical analysis of the side frame load spectrum of the T100C new bogie of the SRM80 full ballast cleaning machine, based on the rain-flow counting method applied to the time-load history. Results show that the kernel density estimation method can restore the original load spectrum, remedy the shortage of small-sample data, and save manpower and resources in train load spectrum statistical work.
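As a rough illustration of the statistical idea (not the paper's actual data or implementation), a Gaussian kernel density estimate over rain-flow-counted load amplitudes can be built in a few lines; the load values and the Silverman rule-of-thumb bandwidth below are hypothetical:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a density estimator f(x) built from the samples."""
    n = len(samples)
    def f(x):
        return sum(
            math.exp(-((x - s) / bandwidth) ** 2 / 2) for s in samples
        ) / (n * bandwidth * math.sqrt(2 * math.pi))
    return f

# Hypothetical load amplitudes (kN) from rain-flow counting of a strain record.
loads = [12.1, 13.4, 12.8, 15.0, 14.2, 13.9, 12.5, 16.1, 13.1, 14.7]

# Silverman's rule-of-thumb bandwidth for a roughly normal sample.
mean = sum(loads) / len(loads)
std = (sum((x - mean) ** 2 for x in loads) / (len(loads) - 1)) ** 0.5
h = 1.06 * std * len(loads) ** (-1 / 5)

density = gaussian_kde(loads, h)
# Riemann sum over the load range: the estimated density integrates to ~1.
total = sum(density(8 + 0.01 * i) * 0.01 for i in range(1000))
```

Unlike a pre-chosen Weibull fit, the estimate makes no prior assumption about the load distribution, which is the advantage the abstract emphasizes for small samples.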
Transient liquid phase bonding with Ga-based alloys for electronics interconnections
Significant demands on electronics packaging are driven by miniaturization, energy efficiency, high performance, and a minimal carbon footprint. Liquid Ga and Ga-based alloys exhibit great potential as interconnect materials in these applications due to benefits including low melting points, non-toxicity, and the stable high-melting-point intermetallic compounds (IMCs) they form with other metals such as Cu. Taking advantage of transient liquid phase bonding (TLPB) technology, Ga-based alloys can be attractive as a more efficient and environmentally friendly route to interconnections. This study aims to address the technical challenges in applying Ga and Ga-based alloys to Cu bonding through the TLPB process, including the solderability of Ga-based solders on Cu, the interfacial reactions during bonding, and the microstructure and properties of the resultant joints. In addition, the porosity and shear strength of the joints have been evaluated for a better understanding of joint reliability. The results show that Ga-based solders present acceptable solderability, given the wetting behavior observed. The resultant joints consist of θ-CuGa2 and γ3-Cu9Ga4 IMC phases, which form as a result of interfacial reactions between the liquid Ga-based alloys and the Cu substrate. Indium, as an alloying element in the solders, has been found to accelerate γ3-Cu9Ga4 formation and reduce the porosity of the joints. The pressure applied during TLPB is also found to reduce porosity and increase the mechanical strength of the joints. These results provide not only new insight into the mechanism of the interfacial reaction between Ga-based liquid metals (GLMs) and solid Cu systems, but also a new manufacturing route for electronics integration.
Artificial Intelligence Algorithms for Treatment of Diabetes
Artificial intelligence (AI) algorithms can provide actionable insights for clinical decision-making and managing chronic diseases. The treatment and management of complex chronic diseases, such as diabetes, stand to benefit from novel AI algorithms analyzing the frequent real-time streaming data and the occasional medical diagnostics and laboratory test results reported in electronic health records (EHRs). Novel algorithms are needed to develop trustworthy, responsible, reliable, and robust AI techniques that can handle the imperfect and imbalanced data of EHRs and inconsistencies or discrepancies with free-living self-reported information. The challenges and applications of AI for two problems in the healthcare domain were explored in this work. First, we introduced novel AI algorithms for EHRs designed to be fair and unbiased while accommodating privacy concerns in predicting treatments and outcomes. Then, we studied the innovative approach of using machine learning to improve automated insulin delivery systems by analyzing real-time information from wearable devices and historical data to identify informative trends and patterns in free-living data. Application examples in the treatment of diabetes demonstrate the benefits of AI tools for medical and health informatics.